
Conversation

ericcurtin
Collaborator

Complete llama-pull tool with documentation

Signed-off-by: Eric Curtin <[email protected]>
@ericcurtin
Collaborator Author

@slaren @ngxson PTAL

Collaborator

@ngxson left a comment


I think this could be a nice tool, but one concern is that it was not originally requested by users. I therefore doubt whether users will actually know about it and use it.

Comment on lines +52 to +56
if (!params.model.hf_repo.empty() && !params.model.docker_repo.empty()) {
LOG_ERR("error: cannot specify both -hf and -dr options\n");
print_usage(argc, argv);
return 1;
}
Collaborator


This should be checked inside common_params_parse, right?
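
For illustration, a minimal sketch of what moving the check into the common parser could look like; the common_params_validate helper and its call site are assumptions, with only the field names and error message taken from the diff above.

// Hypothetical post-parse hook in common/arg.cpp (assumed location);
// field names come from the diff above.
static bool common_params_validate(const common_params & params) {
    // -hf and -dr both select a model source, so they are mutually exclusive
    if (!params.model.hf_repo.empty() && !params.model.docker_repo.empty()) {
        fprintf(stderr, "error: cannot specify both -hf and -dr options\n");
        return false;
    }
    return true;
}
// ... invoked at the end of common_params_parse, before it returns true

Done this way, the conflict would be reported for every tool that uses the common parser, not just llama-pull.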

Comment on lines +58 to +59
// Initialize llama backend for download functionality
llama_backend_init();
Collaborator


Why do we need to initialize the inference backend just to download a model?
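
For context, a download-only entry point could plausibly skip the backend altogether; common_pull_model below is a hypothetical stand-in for the tool's actual download routine, not a real llama.cpp API.

// Hypothetical pull-only main: no llama_backend_init(), since nothing is
// ever loaded for inference on this path.
int main(int argc, char ** argv) {
    common_params params;
    if (!common_params_parse(argc, argv, params, LLAMA_EXAMPLE_COMMON, print_usage)) {
        return 1;
    }
    // resolve -hf/-dr to a local file; common_pull_model is a stand-in name
    if (!common_pull_model(params.model)) {
        return 1;
    }
    return 0;
}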

Comment on lines +21 to +22
LOG(" -o, --output PATH output path for downloaded model\n");
LOG(" (default: cache directory)\n");
Collaborator


Are you sure that the -o is currently handled this way?
